This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, based on the idea that the dynamics of high-level concepts are often sparse. We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the sparse causal graphical model that relates them. We develop a rigorous identifiability theory, building on recent results in nonlinear independent component analysis (ICA), that formalizes this principle and shows how the latent variables can be recovered if one regularizes the latent mechanisms to be sparse and if certain graphical criteria are satisfied by the data-generating process. As a special case of our framework, we show how interventions with unknown targets can be leveraged to disentangle the latent factors, drawing further connections between ICA and causality. We also propose a VAE-based method in which the latent mechanisms are learned and regularized via binary masks, and validate our theory by showing that it learns disentangled representations in simulations.
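To make the mechanism-sparsity idea concrete, here is a minimal sketch (my own illustration, not the authors' implementation) of a latent transition model whose dependency structure is a learned, relaxed binary mask with a sparsity penalty; the class name and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class SparseLinearMechanism(nn.Module):
    """Predicts the next latent state from masked parents: z_t ~ (W * M) z_{t-1}."""
    def __init__(self, z_dim: int):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(z_dim, z_dim))
        self.mask_logits = nn.Parameter(torch.zeros(z_dim, z_dim))  # one logit per edge

    def forward(self, z_prev):
        mask = torch.sigmoid(self.mask_logits)        # relaxed binary mask in (0, 1)
        return z_prev @ (self.weight * mask).T

    def sparsity_penalty(self):
        # Encourages few active edges, i.e. a sparse latent causal graph.
        return torch.sigmoid(self.mask_logits).sum()

# Toy usage: fit latent transitions while regularizing the mechanism to be sparse.
mech = SparseLinearMechanism(z_dim=4)
opt = torch.optim.Adam(mech.parameters(), lr=1e-2)
z_prev, z_next = torch.randn(32, 4), torch.randn(32, 4)
loss = nn.functional.mse_loss(mech(z_prev), z_next) + 1e-2 * mech.sparsity_penalty()
loss.backward()
opt.step()
```

In a full VAE-based model the mask would gate which latents (and actions) each latent mechanism may depend on; the linear mechanism above is only the simplest stand-in.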
Standard gradient descent algorithms applied to sequences of tasks are known to produce catastrophic forgetting in deep neural networks. When trained on a new task in a sequence, the model updates its parameters for the current task and forgets past knowledge. This paper explores scenarios in which we scale the number of tasks within a finite environment. These scenarios consist of long sequences of tasks with reoccurring data. We show that in this setting, stochastic gradient descent can learn, make progress, and converge to solutions that, according to the existing literature, would require a continual learning algorithm. In other words, we show that the model performs knowledge retention and accumulation without any dedicated memorization mechanism. We propose a new experimental framework, SCoLe (Scaling Continual Learning), to study knowledge retention and accumulation of algorithms over potentially infinite sequences of tasks. To explore this setting, we run extensive experiments on sequences of 1,000 tasks to better understand this new family of settings. We also propose a slight modification of vanilla stochastic gradient descent to facilitate continual learning in this setting. The SCoLe framework is a good simulation of practical training environments and allows the study of convergence behavior over long sequences. Our experiments show that previous results obtained on short scenarios cannot always be extrapolated to longer ones.
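The setting can be illustrated with a toy loop, sketched below under my own assumptions (synthetic data, a linear model, arbitrary hyperparameters): a long stream of reoccurring tasks trained with plain SGD and no replay buffer or regularization, after which accuracy is measured over all classes seen along the stream.

```python
import random
import torch
import torch.nn as nn

n_classes, n_tasks, classes_per_task = 10, 1000, 2
model = nn.Linear(20, n_classes)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
prototypes = torch.randn(n_classes, 20)   # fixed class prototypes stand in for real data

for t in range(n_tasks):
    # Each "task" is a small subset of classes; subsets reoccur over the long stream.
    task_classes = random.sample(range(n_classes), classes_per_task)
    labels = torch.tensor([random.choice(task_classes) for _ in range(64)])
    x = prototypes[labels] + 0.3 * torch.randn(64, 20)
    loss = nn.functional.cross_entropy(model(x), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Knowledge accumulation: accuracy over *all* classes after the long stream.
x_eval = prototypes + 0.3 * torch.randn(n_classes, 20)
print((model(x_eval).argmax(1) == torch.arange(n_classes)).float().mean())
```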
The rapid development of large-scale pre-training has produced foundation models that can act as effective feature extractors for a wide variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and CL in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, its pre-training algorithm and data, and the resulting latent space affect CL performance. To this end, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in the latent and in the raw-data space. Notably, this study shows how transfer, forgetting, task similarity, and learning depend on the input data characteristics and not necessarily on the CL algorithm. First, we show that in some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute cost. We then show how models pre-trained on broader data yield better performance across a range of replay sizes, and explain this through the representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised pre-training for downstream domains that differ from the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL, including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. The codebase is available at https://github.com/oleksost/latent_cl.
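As an illustration of the latent-space, non-parametric idea (a sketch under my own assumptions, not the released codebase), one can freeze a pre-trained encoder, keep running per-class feature means, and classify with a nearest-class-mean rule; the helper names are hypothetical, and downloading the torchvision weights is assumed possible.

```python
import torch
import torchvision

# Frozen pre-trained encoder; the final classification head is replaced by identity
# so the 512-d penultimate features are returned.
encoder = torchvision.models.resnet18(weights="DEFAULT")
encoder.fc = torch.nn.Identity()
encoder.eval()

class_means = {}  # running per-class feature means, updated as tasks arrive

@torch.no_grad()
def update(x, y):
    feats = encoder(x)
    for c in y.unique().tolist():
        class_means[c] = feats[y == c].mean(0)

@torch.no_grad()
def predict(x):
    feats = encoder(x)
    classes = sorted(class_means)
    means = torch.stack([class_means[c] for c in classes])
    return torch.tensor(classes)[torch.cdist(feats, means).argmin(1)]

# Toy usage with random "images"; real inputs would be normalized dataset images.
update(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
print(predict(torch.randn(4, 3, 224, 224)))
```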
Despite the empirical success of knowledge distillation, it still lacks a theoretical foundation that naturally leads to computationally inexpensive implementations. To address this, we use a recently proposed entropy functional to forge an alternative connection between information theory and knowledge distillation. In doing so, we introduce two distinct complementary losses that aim to maximize the correlation and the mutual information between the student and teacher representations. Our method achieves performance competitive with the state of the art on knowledge distillation and cross-model transfer tasks, while incurring significantly lower training overhead than closely related and similar approaches. We further demonstrate the effectiveness of our method on a binary distillation task, where we shed light on a new state of the art for binary quantization. The code, evaluation protocols, and trained models will be made publicly available.
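The exact objectives are defined in the paper; below is only a generic sketch, written under my own assumptions, of a correlation-style distillation term that standardizes student and teacher features over the batch and maximizes their per-dimension correlation.

```python
import torch
import torch.nn.functional as F

def correlation_loss(student_feats, teacher_feats, eps=1e-5):
    # Standardize each feature dimension over the batch.
    s = (student_feats - student_feats.mean(0)) / (student_feats.std(0) + eps)
    t = (teacher_feats - teacher_feats.mean(0)) / (teacher_feats.std(0) + eps)
    # Maximizing per-dimension correlation == minimizing 1 - mean correlation.
    corr = (s * t).mean(0)   # one correlation value per feature dimension
    return (1.0 - corr).mean()

# Toy usage with matching feature widths; in practice a small projector would map
# the student's features to the teacher's dimensionality.
loss = correlation_loss(torch.randn(128, 256), torch.randn(128, 256))
print(loss)
```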
In recent years, reinforcement learning (RL) has become increasingly successful in its application to science and the process of scientific discovery in general. However, while RL algorithms learn to solve increasingly complex problems, interpreting the solutions they provide becomes ever more challenging. In this work, we gain insights into an RL agent's learned behavior through a post-hoc analysis based on sequence mining and clustering. Specifically, frequent and compact subroutines, used by the agent to solve a given task, are distilled as gadgets and then grouped by various metrics. This process of gadget discovery develops in three stages: First, we use an RL agent to generate data, then, we employ a mining algorithm to extract gadgets and finally, the obtained gadgets are grouped by a density-based clustering algorithm. We demonstrate our method by applying it to two quantum-inspired RL environments. First, we consider simulated quantum optics experiments for the design of high-dimensional multipartite entangled states where the algorithm finds gadgets that correspond to modern interferometer setups. Second, we consider a circuit-based quantum computing environment where the algorithm discovers various gadgets for quantum information processing, such as quantum teleportation. This approach for analyzing the policy of a learned agent is agent and environment agnostic and can yield interesting insights into any agent's policy.
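A toy version of the three-stage pipeline might look as follows (an illustration under my own assumptions, not the authors' code): mine frequent contiguous action n-grams from recorded episodes, embed each candidate gadget as a bag-of-actions vector, and group the gadgets with a density-based clustering algorithm such as DBSCAN.

```python
from collections import Counter
import numpy as np
from sklearn.cluster import DBSCAN

# Stage 0: action sequences produced by a trained agent (toy data here).
episodes = [[0, 1, 2, 1, 2, 3], [1, 2, 1, 2, 0, 3], [2, 1, 2, 3, 1, 2]]
n_actions, n = 4, 2

# Stage 1 (mining): count every contiguous n-gram and keep the frequent ones.
counts = Counter(tuple(ep[i:i + n]) for ep in episodes for i in range(len(ep) - n + 1))
gadgets = [g for g, c in counts.items() if c >= 3]

# Stage 2 (grouping): embed each gadget as a bag-of-actions vector and cluster.
features = np.array([[g.count(a) for a in range(n_actions)] for g in gadgets])
labels = DBSCAN(eps=1.0, min_samples=1).fit_predict(features)
print(list(zip(gadgets, labels)))
```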
The integration of data and knowledge from several sources is known as data fusion. When data is available in a distributed fashion or when different sensors are used to infer a quantity of interest, data fusion becomes essential. In Bayesian settings, a priori information of the unknown quantities is available and, possibly, shared among the distributed estimators. When the local estimates are fused, such prior might be overused unless it is accounted for. This paper explores the effects of shared priors in Bayesian data fusion contexts, providing fusion rules and analysis to understand the performance of such fusion as a function of the number of collaborative agents and the uncertainty of the priors. Analytical results are corroborated through experiments in a variety of estimation and classification problems.
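One standard way to see the issue (stated here from general Bayesian reasoning rather than quoted from the paper): with observations y_1, ..., y_K that are conditionally independent given the unknown x, and a prior p(x) shared by all K agents,

```latex
p(x \mid y_1,\dots,y_K)
  \;\propto\; p(x)\prod_{k=1}^{K} p(y_k \mid x)
  \;=\; \frac{\prod_{k=1}^{K} p(x \mid y_k)}{p(x)^{K-1}} .
```

Multiplying the local posteriors directly would therefore count the shared prior K times; dividing by p(x)^{K-1} (or an equivalent correction in the fusion rule) removes the overuse referred to above.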
Methods of pattern recognition and machine learning are applied extensively in science, technology, and society. Hence, any advances in related theory may translate into large-scale impact. Here we explore how algorithmic information theory, especially algorithmic probability, may aid in a machine learning task. We study a multiclass supervised classification problem, namely learning the RNA molecule sequence-to-shape map, where the different possible shapes are taken to be the classes. The primary motivation for this work is a proof of concept example, where a concrete, well-motivated machine learning task can be aided by approximations to algorithmic probability. Our approach is based on directly estimating the class (i.e., shape) probabilities from shape complexities, and using the estimated probabilities as a prior in a Gaussian process learning problem. Naturally, with a large amount of training data, the prior has no significant influence on classification accuracy, but in the very small training data regime, we show that using the prior can substantially improve classification accuracy. To our knowledge, this work is one of the first to demonstrate how algorithmic probability can aid in a concrete, real-world, machine learning problem.
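As a rough sketch of how such a prior could enter a classifier (my own assumptions throughout: compressed length as a crude stand-in for shape complexity, and a simple multiplicative re-weighting of a GP classifier's predicted class probabilities, rather than the paper's exact procedure):

```python
import zlib
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

# One representative structure string per class; 2**(-complexity) approximates
# the algorithmic probability of each shape, with compressed length as a proxy.
shapes = ["((..))", "(....)", "......"]
complexity = np.array([len(zlib.compress(s.encode() * 20)) for s in shapes], dtype=float)
prior = 2.0 ** (-complexity)
prior /= prior.sum()          # estimated class prior from shape complexities

# Small toy training set; in the low-data regime the prior matters most.
X_train, y_train = np.random.randn(6, 4), np.array([0, 0, 1, 1, 2, 2])
gp = GaussianProcessClassifier().fit(X_train, y_train)

X_test = np.random.randn(3, 4)
posterior = gp.predict_proba(X_test) * prior   # re-weight class probabilities by the prior
posterior /= posterior.sum(axis=1, keepdims=True)
print(posterior.argmax(axis=1))
```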
Continual Learning, also known as Lifelong or Incremental Learning, has recently gained renewed interest among the Artificial Intelligence research community. Recent research efforts have quickly led to the design of novel algorithms able to reduce the impact of the catastrophic forgetting phenomenon in deep neural networks. Due to this surge of interest in the field, many competitions have been held in recent years, as they are an excellent opportunity to stimulate research in promising directions. This paper summarizes the ideas, design choices, rules, and results of the challenge held at the 3rd Continual Learning in Computer Vision (CLVision) Workshop at CVPR 2022. The focus of this competition is the complex continual object detection task, which is still underexplored in the literature compared to classification tasks. The challenge is based on the challenge version of the novel EgoObjects dataset, a large-scale egocentric object dataset explicitly designed to benchmark continual learning algorithms for egocentric category-/instance-level object understanding, which covers more than 1k unique main objects and 250+ categories in around 100k video frames.
Deep learning models have shown promising results in recognizing depressive states using video-based facial expressions. While successful models typically leverage 3D-CNNs or video distillation techniques, the different use of pretraining, data augmentation, preprocessing, and optimization techniques across experiments makes it difficult to make fair architectural comparisons. We propose instead to enhance two simple models based on ResNet-50 that use only static spatial information by using two specific face alignment methods and improved data augmentation, optimization, and scheduling techniques. Our extensive experiments on benchmark datasets obtain similar results to sophisticated spatio-temporal models for single streams, while the score-level fusion of two different streams outperforms state-of-the-art methods. Our findings suggest that specific modifications in the preprocessing and training process result in noticeable differences in the performance of the models and could hide the actual differences originally attributed to the use of different neural network architectures.
The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed forward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.
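The winner-take-all component can be illustrated with a small sketch (my own illustration, not the paper's DFC implementation): a k-winner-take-all layer that keeps only each sample's k most active units, producing the kind of sparse, mostly non-overlapping representations described above.

```python
import torch
import torch.nn as nn

class KWinnerTakeAll(nn.Module):
    """Keeps only the k largest activations of each sample; all others are zeroed."""
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x):
        kth_value = x.topk(self.k, dim=1).values[:, -1:]   # k-th largest per sample
        return x * (x >= kth_value).float()

layer = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), KWinnerTakeAll(k=16))
sparse_h = layer(torch.randn(8, 784))
print((sparse_h != 0).sum(dim=1))   # number of active (non-zero) units per sample
```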